Digital Privacy

Online safety needs structural change, not more layers of control


The Online Safety Act has grown out of a core issue for our times: who controls the digital world children grow up in? Parents and young people want safer online spaces, yet citizens have little real power to shape them.

The Online Safety Act isn’t working

Write to your MP

A handful of companies dominate the Internet’s infrastructure, controlling how we access platforms and the content we see. They seek to maximise profit through digital advertising, and we are the product. An internal Meta email put a price on teenagers: “The lifetime value of a 13 y/o teen is roughly $270 per teen.” Google, Meta, and Amazon are estimated to generate over 60% of global digital advertising revenue. They achieve this by keeping us on their platforms, maximising engagement, and targeting advertising. This is often in direct conflict with the kind of Internet experience that most of us want.

As it progressed through Parliament, the Online Safety Act grew and grew as safety campaigners highlighted more and more harms. This expansion has continued since the Act came into force. Proposals announced this week include extending age checks to AI chatbots, restricting features such as ‘infinite scrolling’, and exploring how software could stop suspected CSAM images from even being sent or received on a device (which sounds like client-side scanning).

But this creates a regulatory treadmill. Harms evolve faster than regulation. Like the Hydra of Greek mythology, cutting off one head produces two more. The problem isn’t that the Online Safety Act keeps missing harms; it’s that the underlying market structure rewards harmful outcomes.

Campaigners believe that compelling Internet users to verify their age before they can access content will keep children safe. Adults are expected to prove their age not just to access pornography sites but also to use dating apps, to send direct messages on Bluesky, to watch certain videos on Spotify and, later this year, to interact on live streams. The latest proposals suggest age verification to access Virtual Private Networks (VPNs). It is becoming impossible for adults in the UK to avoid age verification if they want to use the Internet.

This means that we have to share sensitive information, including biometric face scans or official identity documents, with companies that provide age verification. Many have terms and conditions that allow platforms to use this data beyond age verification. We don’t have a choice about which age verification provider is used; that is down to the platforms, which often favour the cheapest option. And despite the government’s claim that it wants to keep us safe online, age verification providers are not properly regulated.

Safety campaigners say this is worth it, but age verification brings personal, social and economic costs that include:

Data breaches
Hacks could put you and your family at risk. Discord is one example of a platform that suffered a major data leak after introducing age assurance.

Keeping you online
Platforms can use the data you share for age verification to send you ads and customise content. This makes the harms we experience online worse.

Feeding the adtech system
Data provided for age verification is shared across the advertising and data broker ecosystem and used to target ads at you, some of which could be harmful.

Normalising surveillance
Children risk growing up in systems where proving identity becomes routine and they are forced to accept permanent monitoring. Surveillance technologies move between military, intelligence, and commercial markets. For example, figures associated with the growth of modern surveillance technology, such as Peter Thiel, who co-founded Palantir, have also invested in consumer identity and verification companies, including firms operating in the age assurance space.

Normalising harms for adults
Age verification as a solution fails to tackle online harms once someone has turned 18.

Environmental costs
Age assurance systems require constant maintenance, and they have a carbon footprint.

Consumer costs
Platforms have to pay a fee per age check and expose themselves to new data protection risks. They pass these costs on to users in the form of more online advertising, the removal of free, easy-to-access services, and higher subscription fees.

Restricting public debate
Examples such as how the proscription of Palestine Action interacts with the Online Safety Act demonstrate how safety measures can reduce people’s ability to communicate freely online. The opaque nature of content restriction decisions helps to fuel conspiracy theories. We pay for safety with our freedom to speak and with suspicion about the motives behind content restrictions.

Making you less safe
VPNs protect privacy, security, and free expression. There is little evidence that young people are using VPNs to bypass age verification imposed by the Online Safety Act. So age-gating these systems will have little impact on their online safety, but it may deter adults from using them. Restricting VPNs also gives political cover to oppressive regimes that do the same thing for political control.

Keeping Big Tech in power
Large companies can absorb legal and engineering overheads. Smaller services often cannot. This reduces competition and forces us to use dominant platforms. Using regulation as a barrier to entry is not a new concept.

Safety improves when users, parents, educators, and public-interest institutions have meaningful influence over digital environments.

While individuals at large platforms might say they care about children being safe, their companies’ primary motivation is maximising advertising revenue. With that conflict in their objectives, safety features will always be minimal if they get in the way of keeping your attention.

One way to reduce the conflict between platforms’ interests and those of their users is interoperability: the ability to move between services and communicate across them.

You are not truly free in a space if you cannot leave it. But if you can leave, then platforms start to worry about what you want and need.

Interoperability lowers switching costs and increases competition. Competition creates economic pressure to build safer, more user-respecting environments. Users should be able to take their contacts and communities with them, just as they can switch banks or mobile providers.

Public institutions have a crucial role. When governments and schools rely exclusively on dominant commercial platforms, they reinforce those platforms’ market power. By supporting open and decentralised alternatives, such as services built on interoperable networks like the Fediverse, the public sector could help create viable competition.
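To make that concrete, here is a minimal sketch of what interoperability looks like in practice. Fediverse servers share open protocols, so an account on one service can be discovered and followed from any other. The Python snippet below uses the standard WebFinger lookup (RFC 7033) that Mastodon-style servers expose; the handle shown is hypothetical.

```python
# A minimal sketch of Fediverse interoperability: resolving a handle with
# WebFinger (RFC 7033), the open lookup step Mastodon-style servers use so
# that an account on one service can be followed from another.
# The handle below is hypothetical; substitute any real Fediverse account.
import json
import urllib.parse
import urllib.request

def resolve_handle(handle: str) -> dict:
    """Ask the account's home server who 'user@host' is."""
    user, host = handle.lstrip("@").split("@", 1)
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{host}"})
    url = f"https://{host}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

profile = resolve_handle("@someone@example.social")
# The response lists the account's ActivityPub actor URL; any other
# interoperable server can use it to deliver follows, replies, and posts.
for link in profile.get("links", []):
    print(link.get("rel"), link.get("href"))
```

Because any server can perform this lookup, no single company controls who can reach whom, which is exactly the property that lets users leave a service without losing their communities.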

Even modest institutional adoption would send a strong signal. Structural change often begins with early public leadership.

Fix the Online Safety Act